• The discussion of negligence liability for AI developers emphasizes shifting the focus from AI systems themselves to the people who create and manage them. Current regulatory frameworks, such as the EU AI Act, concentrate primarily on the technical attributes of AI systems and largely neglect the human element in their development, allowing AI engineers to distance themselves from the consequences of their creations. A negligence-based approach is proposed as a more effective means of accountability because it directly addresses the actions and decisions of the individuals behind the technology.
• Negligence law rests on the principle that individuals must act with a certain level of care; if their failure to do so harms others, they can be held liable. In the AI context, this means asking whether developers exercised sufficient care in the design, testing, and deployment of their systems. The governing standard is "reasonable care," defined by what a reasonably prudent person would do in similar circumstances; it is flexible and varies with context, including the type of AI system and the expertise of the developers involved.
• The article also explores the difficulty of establishing a clear standard of care in a rapidly evolving field. The relative homogeneity of successful machine learning techniques leaves room for consensus on certain methodologies, but the absence of established norms complicates any determination of what reasonable care requires. Developing guidelines and best practices for AI safety could supply benchmarks for evaluating negligence.
• Negligence liability also has built-in limits: a plaintiff must prove an actual injury resulting from the defendant's conduct and establish a clear causal link between the two. Causation is especially complex in AI cases, where multiple factors may contribute to a single incident.
• Duty of care is another critical element of negligence law: the defendant must owe a duty to the plaintiff. In the AI context, that duty may be complicated by the nature of the services provided, particularly when they are offered for free, and courts may differ on whether AI developers owe a duty to all potential users or only to foreseeable victims of their systems.
• Statutory restrictions further complicate the landscape. Section 230 of the Communications Decency Act grants online service providers immunity for third-party content, raising the question of whether such protections extend to AI-generated outputs; the evolving nature of the technology will require ongoing legislative attention.
• The article contrasts the negligence framework with other tort theories, such as strict liability, which may offer a more straightforward path to accountability but risks imposing excessive liability without addressing the underlying issues of human conduct.
• The author suggests that a negligence-based approach, while not a panacea, is a necessary starting point for discussions of AI accountability and safety. Looking ahead, liability could develop along different paths, such as treating AI developers as ordinary employees or as professionals akin to doctors and lawyers, each with distinct implications for liability and insurance.
• The emphasis on human accountability in AI development is crucial: it returns attention to the individuals who design and implement these systems, ultimately fostering a culture of responsibility within the AI community.